Patent Abstract:
PURPOSE: A DRAM for high-speed data access is provided, which enables high-speed data input/output regardless of the data access pattern. CONSTITUTION: The DRAM comprises a number of normal banks (100) and at least one cache bank (200, 300) that has substantially the same access method as the normal banks and selectively stores data from the normal bank selected during a read operation. A control unit (400) controls access to the normal bank and the cache bank when there are consecutive read commands for the selected normal bank. The normal bank (100) and the cache bank (200, 300) have substantially the same cell array.
Publication number: KR20040008709A
Application number: KR1020020042380
Filing date: 2002-07-19
Publication date: 2004-01-31
Inventors: 국정훈; 홍상훈; 김세준
Applicant: 주식회사 하이닉스반도체 (Hynix Semiconductor Inc.)
Patent Description:

DRAM for high speed data access
[9] BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates to a semiconductor memory device, and more particularly, to a dynamic random access memory (DRAM) device capable of inputting and outputting data at high speed using a multi-bank structure.
[10] In recent years, the operating speed of CPUs has improved remarkably and now exceeds that of DRAMs, so that the operating speed of the DRAM is relatively slow compared to that of the CPU. To overcome this problem, DRAMs with various structures for inputting and outputting data at higher speeds have been developed.
[11] First, the access time of a DRAM is determined by physical constants such as the resistance and capacitance of the word lines and bit lines. Reducing the size of the unit cell array in order to reduce these resistance and capacitance components shortens the access time, but this approach is limited because it lowers the cell efficiency.
[12] Meanwhile, DRAMs that consist of multiple banks and operate in an interleave mode for high-speed input and output have been developed.
[13] A normal-bank DRAM using the interleave mode transfers a large amount of data within a given time by means of bank interleaving: the memory array is divided into banks, and a memory controller accesses the banks one after another to obtain the data. That is, while data output from one bank is being restored, data is output from a neighboring bank, so that, viewed from the outside, the data appears to be output continuously without waiting for re-storing. Therefore, in a normal-bank DRAM, each bank has its own row decoder and column decoder and operates independently of the other banks.
[14] However, even in a DRAM having a normal bank structure operating in the interleave mode, data cannot be input or output at high speed when accesses are continuously concentrated in the same bank. That is, the access speed of the DRAM is greatly affected by the data input/output pattern.
[15] Meanwhile, as another approach, a structure has been proposed in which a static RAM (SRAM) cache bank operating at a relatively high speed is integrated into the DRAM to reduce the access time on a cache hit.
[16] However, when a DRAM is manufactured with an integrated SRAM cache bank, the total area of the DRAM increases greatly, because an SRAM generally occupies about four times the area of a DRAM of the same capacity. In addition, when a cache miss occurs during data access, data input/output is performed in the same manner as before, so the access speed of the DRAM is still greatly affected by the data access pattern.
[17] An object of the present invention is to provide a DRAM capable of inputting and outputting data at high speed regardless of the data access pattern, in a DRAM that inputs and outputs data in an interleaving manner using a normal bank structure.
[1] FIG. 1 is a block diagram of a DRAM device according to a preferred embodiment of the present invention.
[2] FIG. 2 is a block diagram illustrating the normal bank unit and the cache bank unit in the DRAM of FIG. 1.
[3] FIG. 3 is a block diagram illustrating the controller in the DRAM of FIG. 1.
[4] FIG. 4 is a circuit diagram illustrating a sense amplifier unit provided in each bank in the DRAM of FIG. 1.
[5] FIGS. 5 to 12 are waveform diagrams showing the operation of the DRAM shown in FIG. 1.
[6] FIG. 13 is a circuit diagram for selectively applying multiple power supply voltages to a data storage buffer in order to store data at high speed in the DRAM shown in FIG. 1.
[7] FIG. 14 is a simulation waveform diagram of data storage by the circuit of FIG. 13.
[8] FIG. 15 is a simulation waveform diagram showing the operation of the DRAM shown in FIG. 1.
[18] To achieve the above object, the present invention provides a DRAM comprising: a plurality of normal banks; at least one cache bank having substantially the same access scheme as the normal banks, for selectively storing data from the selected normal bank during a read operation; and control means for controlling access to the normal bank and the cache bank when there are consecutive read commands for the selected normal bank.
[19] The present invention also provides a DRAM comprising: a plurality of normal banks; first and second cache banks having substantially the same access scheme as the normal banks; and control means for outputting data in an interleaved manner when there are alternating read accesses to different normal banks, and, when there are consecutive read accesses to one normal bank, for controlling the corresponding data to be output from the selected normal bank and moved to the first or second cache bank.
[20] The present invention further provides a method of driving a DRAM having a plurality of banks for continuously outputting data, the method comprising: outputting data using an interleaving scheme in which data is alternately output from first and second banks; when a plurality of data are output continuously from the first bank, outputting the data from the first bank to the outside while moving the data to a provided cache bank unit; and, when the same data is to be output again from the first bank, outputting the data from the cache bank unit.
[21] The present invention provides two cache banks having the same structure as one bank in a DRAM with a normal bank structure, so that when data in the same bank is accessed continuously, the next data can be accessed immediately without performing a restoring operation, thereby enabling fast access. At this time, the data that is not restored, and would otherwise be destroyed, is stored in the two provided cache banks at the time of access, so that the data is preserved.
[22] Hereinafter, preferred embodiments of the present invention will be introduced in order to enable those skilled in the art to more easily carry out the present invention.
[23] FIG. 1 is a block diagram of a DRAM according to a preferred embodiment of the present invention.
[24] Referring to FIG. 1, the DRAM according to the present invention includes a plurality of normal banks 100; at least one cache bank 200, 300 having substantially the same access method as the normal banks 100, for selectively storing data from the selected normal bank (for example, Bank0) during a read operation; and a control unit 400 for controlling access to the normal bank (for example, Bank0) and the cache banks 200 and 300 when there are consecutive read commands for the selected normal bank.
[25] In this embodiment, two cache banks 200 and 300 having substantially the same access method as the normal banks are provided. When there are alternating read accesses to different normal banks, the control unit 400 outputs the corresponding data in an interleaved manner; when there are consecutive read accesses to one normal bank, the control unit controls the data to be output from the selected normal bank and moved to the first or second cache bank 200 or 300.
[26] FIG. 2 is a block diagram illustrating the normal bank unit 100 and the cache bank unit 600 in the DRAM illustrated in FIG. 1.
[27] Referring to FIG. 2, the normal bank unit 100 includes storage buffers 111, 113, and 115 that serve as data latches and buffers for storing data in the unit cells of a bank corresponding to an address signal, data output sense amplifiers 112, 114, and 116 for amplifying and outputting data at the time of data output, and a plurality of banks 110 to 160 that can be accessed for input and output independently by means of their unit cells, sense amplifiers, and the like.
[28] In addition, the cache bank unit 600 includes first and second cache banks 200 and 300 having the same structure as one bank of the normal bank unit 100, latches 220, 240, and 260 for latching the data output from the normal bank unit 100 and transferring it to the first cache bank 200 or the second cache bank 300 according to a control signal, and data output sense amplifiers 210, 230, and 250 for amplifying the data output from the first and second cache banks 200 and 300 and outputting it to the normal bank unit 100 or to the outside. The first and second cache banks have the same data storage capacity as one bank.
[29] In addition, multiplexers 117, 118, 119, 270, and 280, which operate according to various control signals /WE, BA, and CA, are provided for timing synchronization when data is stored in each of the banks Bank0 to BankN or in the cache banks 200 and 300.
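The bank organization of FIGS. 1 and 2 can be summarized with a minimal behavioral model. The Python sketch below is an illustration only, not part of the patent disclosure: the class names and sizes are assumptions, and the sense amplifiers, storage buffers, and multiplexers are abstracted away; it only shows that the cache banks use the same cell-array structure and access method as a normal bank.

```python
# Minimal behavioral abstraction of the bank organization in FIG. 1 / FIG. 2.
# Class names and sizes are illustrative assumptions only.

class Bank:
    """One independently accessible cell array (normal bank or cache bank)."""
    def __init__(self, num_cells):
        self.cells = [None] * num_cells      # unit cells addressed by a cell address (ra)

    def read(self, ra):
        return self.cells[ra]

    def write(self, ra, data):
        self.cells[ra] = data


class Dram:
    """N normal banks plus two cache banks of the same size and structure."""
    def __init__(self, num_banks, cells_per_bank):
        self.normal = [Bank(cells_per_bank) for _ in range(num_banks)]
        # The cache banks reuse the same Bank structure: same cell array,
        # same access method, same storage capacity as one normal bank.
        self.cache = [Bank(cells_per_bank), Bank(cells_per_bank)]


if __name__ == "__main__":
    dram = Dram(num_banks=4, cells_per_bank=8)
    dram.normal[0].write(0, "D0")
    # Moving data into a cache bank uses the same write path as a normal bank.
    dram.cache[0].write(0, dram.normal[0].read(0))
    print(dram.cache[0].read(0))             # -> D0
```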
[30] FIG. 3 is a block diagram illustrating a controller of the DRAM illustrated in FIG. 1.
[31] Referring to FIG. 3, the controller 400 includes an address comparison unit 440 for comparing whether the data corresponding to an address signal (bank address and row address) is present in the first and second cache banks 200 and 300, an access controller 450 for outputting, according to the comparison result of the address comparison unit 440, the signals CRR, CFR, and CFW for controlling data access of the first and second cache banks 200 and 300 or the control signals BRR, BFR, and BFW for controlling access to the normal bank unit 100, and a command decoder 420 for controlling the access controller 450 in response to the control signals /CS, /WE, and /OE.
[32] In addition, the address comparison unit 440 includes an input unit 442 that receives the address signal and divides it into a bank address ba corresponding to one of the plurality of banks and a cell address ra corresponding to one of the plurality of unit cells provided in one bank, and a comparison unit 441 that receives the bank address ba and the cell address ra and compares them with the bank address and cell address corresponding to the data stored in the cache bank unit 600.
[33] The address comparison unit 440 further includes a first flip-flop 412 for outputting the bank address ba and the cell address ra output from the input unit 442 to the comparison unit 441 in synchronization with a clock ck, a predecoder 430 for decoding the cell address ra output from the first flip-flop 412 and outputting it to the comparison unit 441, a second flip-flop 413 for synchronizing the cell address output from the predecoder 430 and the bank address ba output from the first flip-flop 412 with the clock ck, and a third flip-flop 415 for latching the output signal (next hit/miss) of the comparison unit 441 and outputting it in synchronization with the clock ck.
[34] Here, the access controller 450 uses the signal (current hit/miss) output from the third flip-flop 415 as a determination signal for controlling the normal bank unit 100 and the cache bank unit 600 at the current clock, and uses the signal (next hit/miss) output from the comparison unit 441 as a determination signal for controlling the normal bank unit 100 and the cache bank unit 600 at the next clock. The bank address (next ba) output from the first flip-flop 412 is used as the bank address signal for data access at the next clock, and the bank address signal (current ba) output from the second flip-flop 413 is used as the bank address for data access at the current clock.
[35] In addition, the controller 400 further includes an output latch unit 460 for matching the output timing of the data control signals CRR, CFR, and CFW, which are output from the access controller 450 to control the normal bank unit 100 or the cache bank unit 600, with the cell address ra and the bank address ba output from the second flip-flop 413. The output latch unit 460 is composed of two flip-flops 416 and 417.
[36] In addition, the controller 400 further includes a fourth flip-flop 411 for latching the control signals /CS, /WE, /OE, and so on, and outputting them to the command decoder 420 in synchronization with the output signal of the first flip-flop 412, and a fifth flip-flop 414 for latching the output signal of the command decoder 420 and outputting it to the access controller 450 in synchronization with the output signal of the second flip-flop 413.
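The pipelined hit/miss decision of FIG. 3 can be sketched behaviorally as follows. This Python model is an illustration under simplifying assumptions, not the register-level design: the dictionary-based cache directory stands in for the comparison unit 441, and a single latched result stands in for the flip-flop staging that turns "next hit/miss" into "current hit/miss" one clock later.

```python
# Behavioral sketch of the hit/miss pipeline of FIG. 3 (illustrative only).

class CacheDirectory:
    """Tracks which bank's data occupies each cell address of each cache bank."""
    def __init__(self, num_cache_banks=2):
        # tag[c][ra] = bank address whose data occupies cell ra of cache bank c
        self.tag = [dict() for _ in range(num_cache_banks)]

    def lookup(self, ba, ra):
        """Return the cache bank holding the data of (ba, ra), or None on a miss."""
        for c, tags in enumerate(self.tag):
            if tags.get(ra) == ba:
                return c
        return None


class ControllerPipeline:
    """Compares the incoming (next) address now, uses the result one clock later."""
    def __init__(self, directory):
        self.directory = directory
        self.current = None                  # result latched from the previous clock

    def clock(self, ba, ra):
        nxt = {"ba": ba, "ra": ra, "cache_hit": self.directory.lookup(ba, ra)}
        cur, self.current = self.current, nxt
        return cur, nxt                      # (current hit/miss, next hit/miss)


if __name__ == "__main__":
    d = CacheDirectory()
    d.tag[0][1] = 0                          # cache bank 0, cell 1 holds data of bank 0
    pipe = ControllerPipeline(d)
    print(pipe.clock(ba=0, ra=1))            # next access will hit cache bank 0
    print(pipe.clock(ba=0, ra=2))            # the previous result is now the current decision
```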
[37] FIG. 4 is a circuit diagram illustrating the sense amplifier unit provided in each bank in the DRAM illustrated in FIG. 1.
[38] Referring to FIG. 4, the sense amplifier unit includes a sense amplifier 520 for sensing and amplifying a signal applied to a bit line BL or /BL connected to one unit cell among the plurality of unit cells constituting the cell array 500, a precharge unit 510 for shorting or isolating the sense amplifier 520 from the cell array 500 and for precharging the bit lines BL and /BL, a data input unit 530 for providing a data path for storing data in a unit cell of the cell array 500 through the sense amplifier 520, and a data output unit 540 for outputting the signal amplified by the sense amplifier 520.
[39] FIGS. 5 through 12 are waveform diagrams illustrating the operation of the DRAM illustrated in FIG. 1. The operation of the present invention will now be described with reference to FIGS. 1 to 12.
[40] Since the DRAM proposed in the present invention has a normal bank structure, if consecutive accesses are to different banks, data is output from and restored in one bank by a two-way interleaving operation while data is output from another bank at the timing of that restoring, so that data is output continuously. Therefore, the access time of a DRAM performing the interleaving operation is half the access time tRC of a conventional DRAM.
[41] FIG. 5 is an operational waveform diagram of the DRAM for data output when no interleaving operation is performed while data in different banks is accessed, and FIG. 6 is an operational waveform diagram of the DRAM for data output when the interleaving operation is performed while data in different banks is accessed.
[42] Referring to FIG. 5, when the first read command RD0 is input, the first data D0 corresponding to the first address AD0 is output from the first bank, and when the second read command RD1 is then input, the second data D1 corresponding to the second address AD1 is also output from the first bank. In this case, it can be seen that the time required for each data output is 'tRR', which includes the time to output the data and the time to restore it. Here, tRR represents the time required for data output in a general DRAM.
[43] Next, referring to FIG. 6, when the first read command RD0 is input, the first data D0 corresponding to the first address AD0 is output from the first bank, and when the second read command RD1 is input, the second data D1 corresponding to the second address AD1 is output from the second bank. Subsequently, when the third read command RD2 is input, the third data D2 corresponding to the third address AD2 is output again from the first bank. In this case, the first data D0 is output immediately, and the second data D1 and third data D2 follow continuously, so the time required per data output is 0.5 tRR. This is because data is output from the other bank during the time one bank spends restoring. As described above, when data is output alternately between banks, the interleaving mode allows the time required per data output to be 0.5 tRR.
[44] However, as described above, when the data access pattern accesses only one bank continuously, the data output time is inevitably tRR even in the interleaving operation.
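The access-time arithmetic behind FIGS. 5 and 6 can be checked with a short calculation. The sketch below only adds up output intervals for a plain interleaved DRAM without cache banks; the numeric value of tRR is an arbitrary placeholder, and the first access is counted at 0.5 tRR for simplicity.

```python
# Output-time arithmetic for FIGS. 5 and 6: plain interleave mode, no cache banks.

tRR = 60.0   # ns, full read cycle including output and restore (placeholder value)

def output_time(banks):
    """Total output time for a sequence of bank numbers in interleave mode."""
    total = 0.0
    for i, b in enumerate(banks):
        if i > 0 and b == banks[i - 1]:
            total += tRR          # same bank again: the full restore time is exposed
        else:
            total += 0.5 * tRR    # different bank: the restore is hidden by interleaving
    return total

print(output_time([0, 1, 0, 1]))  # alternating banks: 120.0 ns
print(output_time([0, 0, 0, 0]))  # one bank only:     210.0 ns
```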
[45] According to the present invention, two cache banks having the same structure as one bank of the normal bank unit are provided so that the access time is maintained at '1/2 tRR' or less even when consecutive accesses are to the same bank, and a micro-core operation, a command scheme for fast output, is proposed for this purpose.
[46] In a general DRAM, the operation of a read command consists of word line activation -> charge sharing -> sensing -> restoring -> precharging.
[47] The micro-core operation proposed in the present invention consists of a fast read command (hereinafter referred to as tFR) composed of word line activation -> charge sharing -> sensing -> precharging, and a fast write command (hereinafter referred to as tFW) composed of word line activation -> restoring -> precharging.
[48] First, when a read command and an address are input, the corresponding data is output by the fast read command (tFR) operation. Since the fast read command (tFR) does not restore the data, the bit line remains in the charge-sharing state, and only the sense amplifier needs to be operated to read data continuously. That is, since no time is spent on re-storing, data can be output within the '0.5 tRR' time, which is the data output time in the interleaving mode.
[49] At this time, the data that has been read once is not stored back into the cell, so the data is destroyed. To preserve it, this data is stored in the provided cache bank by the fast write command (tFW) operation at the moment it is read.
[50] On the other hand, since storing data usually takes more time than reading it, the fast write command (tFW) takes more time than the fast read command (tFR). Therefore, according to the present invention, the DRAM is designed such that the condition tFR <= tFW <= 1/2 tRR is satisfied.
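The micro-core commands and the design condition above can be written down compactly. In the sketch below, the phase sequences follow the description in paragraphs [46] and [47], while the numeric timings are placeholder assumptions used only to show how the condition tFR <= tFW <= 1/2 tRR would be checked.

```python
# Micro-core command phase sequences and the timing condition (timings are placeholders).

READ       = ["word line activation", "charge sharing", "sensing", "restoring", "precharging"]
FAST_READ  = ["word line activation", "charge sharing", "sensing", "precharging"]   # tFR: no restore
FAST_WRITE = ["word line activation", "restoring", "precharging"]                   # tFW: restore path only

tRR = 60.0   # ns, full read cycle of a general DRAM (placeholder)
tFR = 22.0   # ns, fast read command time (placeholder)
tFW = 26.0   # ns, fast write command time, somewhat longer than tFR (placeholder)

# Design condition stated above: both micro-core commands must fit within half a cycle.
assert tFR <= tFW <= 0.5 * tRR
print(FAST_READ)
print(FAST_WRITE)
```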
[51] FIG. 7 is an operational waveform diagram for the case where data is continuously output from one bank using the above-described fast read command and fast write command.
[52] Referring to FIG. 7, when the first read command RD0 is input, the first data D0 corresponding to the first cell address AD0 is output from the first bank. In this case, the fast read command tFR is applied to the first bank, so the data D0 is output without being re-stored, and the data D0 is moved to and stored in the provided cache bank by the fast write command tFW operation. The cell addresses AD0 to AD11 refer to addresses within one bank.
[53] Next, the second data D1 corresponding to the second cell address AD1 is output according to the second read command RD1. At this time, since data does not need to be output continuously afterwards, the restoring operation is performed as in a normal read command operation. Here, MAX(tFW, 0.5*tRR) indicates that the command interval is determined by the larger of the fast write command time tFW and 0.5 tRR; since tFW does not exceed 0.5 tRR, this interval remains 0.5 tRR.
[54] FIG. 8 is an operational waveform diagram for the case where data is continuously output from the first bank and then the same data is requested again, so that the data is output from the cache bank.
[55] Referring to FIG. 8, when the first read command RD0 is input, the first data D0 corresponding to the first cell address AD0 is output from the first cache bank. Here, the control unit 400 receives the first cell address AD0, determines the cache bank hit or miss, and controls the first cache bank, so that the first data D0 is output from the first cache bank. Subsequently, the second data D1 corresponding to the second cell address AD1 is also output from the first cache bank by the second read command RD1.
[56] At this time, because the second read command RD1, input immediately after the first data D0 is output, shows that the second data D1 corresponding to the second cell address AD1 is also present in the first cache bank, the first data D0 cannot be re-stored in the first cache bank. Therefore, the first data D0 is output and, by the above-described fast write command tFW operation, is moved back to the first bank where it was originally stored; the second data D1 is then output and can be restored normally. Therefore, even when data is output continuously from the first cache bank, the data can be output within the '0.5 tRR' time.
[57] In order to perform the above operation, the address comparison unit 440 of the controller 400 shown in FIG. 3 receives the address (next ba, next ra) for the next operation, processes the hit/miss determination for the current address (current hit/miss) and the hit/miss determination for the next address (next hit/miss) at the same time, and outputs them to the access controller 450. The access controller 450 outputs the control signals BRR, BFR, and BFW for the normal bank and the control signals CRR, CFR, and CFW for the cache bank to the normal bank unit 100 and the cache bank unit 600 simultaneously during the operation of each command.
[58] FIG. 9 is an operational waveform diagram showing interleaving between a bank and a cache bank, where one data is output from the bank and the next data is output from the cache bank.
[59] Referring to FIG. 9, when the first read command RD0 is input, the first data D0 corresponding to the first cell address AD0 is output from the first bank. Subsequently, when the second read command RD1 is input, the data D1 corresponding to the second cell address AD1 is output from the first cache bank. In this case, the data appears to be output continuously at 0.5 tRR intervals even though the restoring operation is performed, without needing to use the fast read command tFR and the fast write command tFW.
[60] FIG. 10 is an operational waveform diagram for the case where four consecutive accesses to one bank are followed by another four consecutive accesses to the same bank. The first to eighth read commands RD0 to RD7 are all access commands for one bank.
[61] Referring to FIG. 10, when the first read command RD0 is input, the first data D0 corresponding to the first cell address AD0 is output from the first bank according to the fast read command tFR, and at the same time the first data D0 is moved to the first cache bank according to the fast write command tFW. Subsequently, the second and third data D1 and D2 are output according to the fast read command tFR in response to the second read command RD1 and the third read command RD2, and at the same time the second and third data D1 and D2 are moved to the first cache bank according to the fast write command tFW.
[62] Subsequently, when the fourth read command RD3 is input, the fourth data D3 corresponding to the fourth address AD3 is output from the first bank. In this case, the bank operates according to a normal read command that performs the ordinary restoring operation instead of the fast read command tFR. This is possible because the next data to be output is already in the first cache bank and can therefore be output directly from the first cache bank.
[63] Subsequently, according to the fifth read command RD4 and the sixth read command RD5, the first and second data D0 and D1 are output from the first cache bank according to the fast read command tFR, and at the same time the first and second data D0 and D1 are moved back to the first bank according to the fast write command tFW.
[64] Subsequently, the third data D2 is output from the first cache bank in response to the seventh read command RD6, and the fourth data D3 is output from the first bank in response to the eighth read command RD7. For the seventh and eighth read commands RD6 and RD7, the fast read command tFR and the fast write command tFW do not need to be used, because the data D2 and D3 are stored in the first cache bank and the first bank, respectively, and can therefore be output by the interleaving operation.
[65] Therefore, even during an operation in which data is continuously output from one bank, the data is always output every 0.5 tRR as seen from the outside.
[66] FIG. 11 is a waveform diagram showing an operation in which four consecutive data accesses to one bank are followed by four consecutive data accesses to another bank; at least two cache banks are required in this case.
[67] Referring to FIG. 11, when the first read command RD0 is input, the first data D0 corresponding to the first cell address AD0 is output from the first bank according to the fast read command tFR, and at the same time the first data D0 is moved to the first cache bank according to the fast write command tFW. Subsequently, the second and third data D1 and D2 are output according to the fast read command tFR in response to the second read command RD1 and the third read command RD2, and at the same time the second and third data D1 and D2 are moved to the first cache bank according to the fast write command tFW. Subsequently, when the fourth read command RD3 is input, the fourth data D3 corresponding to the fourth address AD3 is output from the first bank; in this case, the restoring operation is performed according to a normal read command rather than the fast read command tFR. Up to this point, the operation is the same as shown in FIG. 10.
[68] Subsequently, when the fifth read command RD4 is input, the fifth data D4 corresponding to the first address AD0 of the second bank is output from the second bank according to the fast read command tFR, and at the same time the fifth data D4 is moved to the first cache bank according to the fast write command tFW (E in FIG. 11). At this time, since the first data D0 is still stored at the address AD0 of the first cache bank to which the fifth data D4 is to be moved, the first data D0 is moved back from the first cache bank to the first bank before the fifth read command RD4 is executed (A in FIG. 11).
[69] Subsequently, the sixth and seventh data D5 and D6 corresponding to the addresses AD1 and AD2 of the second bank are output from the second bank by the fast read command tFR operation in response to the sixth read command RD5 and the seventh read command RD6; the sixth data D5 is moved to the second cache bank according to the fast write command tFW, and the seventh data D6 is stored in the first cache bank according to the fast write command tFW (D in FIG. 11). Here, since the data D1 corresponding to the same address AD1 of the first bank already exists in the first cache bank, the sixth data D5 is moved not to the first cache bank but to the second cache bank (B in FIG. 11). The seventh data D6 can be moved to the first cache bank because the third data D2 has already been moved back to the first bank (C in FIG. 11).
[70] Two cache banks are required for data access as described above; as long as two cache banks are provided, the DRAM according to the present invention can always output data at 0.5 tRR intervals regardless of the data pattern.
[71] FIG. 12 is a waveform diagram showing an operation in which data stored in each of three banks is accessed successively and output at 0.5 tRR intervals.
[72] Referring to FIG. 12, first, the data D0 to D2 corresponding to the cell addresses AD0 to AD2 are output from the first bank according to the fast read command tFR in response to the first to third read commands RD0 to RD2, and at the same time the data D0 to D2 are moved to the first cache bank according to the fast write command tFW. Subsequently, the fourth data D3 corresponding to the fourth cell address AD3 is output from the first bank in response to the fourth read command RD3; in this case, the fourth data D3 is output according to a normal read command instead of the fast read command tFR, and the restoring operation is performed.
[73] Subsequently, the data D4 to D6 corresponding to the cell addresses AD0 to AD2 of the second bank are output from the second bank according to the fast read command tFR in response to the fifth to seventh read commands RD4 to RD6, and at the same time the data D4 to D6 are moved to the second cache bank according to the fast write command tFW. Subsequently, the eighth data D7 corresponding to the fourth cell address AD3 of the second bank is output from the second bank in response to the eighth read command RD7; in this case, the restoring operation is performed according to a normal read command, not the fast read command tFR.
[74] Subsequently, when the ninth read command RD8 is input, the ninth data D8 corresponding to the first cell address AD0 of the third bank is output from the third bank according to the fast read command tFR, and at the same time the ninth data D8 is moved to the first cache bank according to the fast write command tFW (E in FIG. 12). At this time, since the first data D0 is still stored at the cell address AD0 of the first cache bank to which the ninth data D8 is to be moved, the first data D0 is moved back from the first cache bank to the first bank before the ninth read command RD8 is executed (A in FIG. 12).
[75] Subsequently, the tenth and eleventh data D9 and D10 corresponding to the cell addresses AD1 and AD2 of the third bank are output from the third bank by the fast read command tFR operation in response to the tenth read command RD9 and the eleventh read command RD10; at the same time, the tenth data D9 is moved to the second cache bank according to the fast write command tFW (F in FIG. 12), and the eleventh data D10 is stored in the first cache bank according to the fast write command tFW.
[76] Here, since the data D1 corresponding to the cell address AD1 of the third bank already exists in the first cache bank, the tenth data D9 is moved not to the first cache bank but to the second cache bank (B in FIG. 12). However, since the data D5 of the second bank corresponding to the cell address AD1 is already stored in the second cache bank at this time, the data D5 must first be moved back to the second bank. Meanwhile, the eleventh data D10 can be moved to the first cache bank because the third data D2 has already been moved back to the first bank (C in FIG. 12).
[77] As described above, even with three banks and continuous data accesses to each bank, only two cache banks are needed; as long as two cache banks are provided, data can always be output at 0.5 tRR intervals regardless of the data pattern.
[78] In other words, even if the number of banks increases or the data access pattern becomes more complicated, as long as there are two cache banks, data can be output continuously to the outside at 0.5 tRR intervals.
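The data-movement rule running through FIGS. 10 to 12 can be captured in a small behavioral simulation. The Python sketch below is an abstraction under stated assumptions, not the controller implementation: command timing, pipelining, and bus conflicts are ignored, the write-back of a conflicting entry is assumed to happen while its home bank is idle, and the particular cache bank chosen for a move may differ from the one drawn in the figures. It only illustrates the rule that a read followed by another read served from the same array uses tFR plus a tFW move to another array, while a read followed by an access to a different array can restore normally.

```python
# Behavioral sketch of the data-movement rule of FIGS. 10 to 12 (illustration only).

HOME, CACHE0, CACHE1 = "bank", "cache0", "cache1"

def simulate(reads, num_banks, cells_per_bank, num_cache=2):
    # location[(ba, ra)] = array in which the data of that cell currently resides
    location = {(b, r): (HOME, b) for b in range(num_banks) for r in range(cells_per_bank)}
    # occupant[c][ra] = (ba, ra) whose data sits in cell ra of cache bank c
    occupant = [dict() for _ in range(num_cache)]
    log = []

    def move_to_cache(key):
        ra = key[1]
        free = [c for c in range(num_cache) if ra not in occupant[c]]
        if not free:                          # both slots taken: write one entry back home
            victim = occupant[0].pop(ra)      # assumes the victim's home bank is idle now
            location[victim] = (HOME, victim[0])
            free = [0]
        c = free[0]
        occupant[c][ra] = key
        location[key] = (CACHE0 if c == 0 else CACHE1, ra)

    def move_home(key):
        c = 0 if location[key][0] == CACHE0 else 1
        occupant[c].pop(key[1], None)
        location[key] = (HOME, key[0])

    for i, key in enumerate(reads):
        where = location[key][0]
        src = f"bank{key[0]}" if where == HOME else where
        nxt = reads[i + 1] if i + 1 < len(reads) else None
        same_array = (nxt is not None and location[nxt][0] == where and
                      (where != HOME or nxt[0] == key[0]))
        if same_array:                        # fast read (tFR) plus fast write (tFW) move
            if where == HOME:
                move_to_cache(key)
            else:
                move_home(key)
            log.append((i, key, src, "tFR, data moved"))
        else:                                 # ordinary read with restore, data stays put
            log.append((i, key, src, "normal read"))
    return log

if __name__ == "__main__":
    # FIG. 11 pattern: four reads from bank 0 followed by four reads from bank 1.
    pattern = [(0, a) for a in range(4)] + [(1, a) for a in range(4)]
    for entry in simulate(pattern, num_banks=2, cells_per_bank=4):
        print(entry)
```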
[79] On the other hand, when data '1' is stored, writing the data into the cell generally takes longer than storing data '0' or reading data. Therefore, the present invention proposes a method of reducing the storage time by applying a higher power supply voltage to the buffer on the storage path when storing data '1'.
[80] FIG. 13 is a circuit diagram for selectively applying multiple power supply voltages to the data storage buffer in order to store data at high speed in the DRAM shown in FIG. 1.
[81] Referring to FIG. 13, the circuit includes a unit cell 710 composed of one transistor and one capacitor, a data input buffer (BUF) 720 to which a first power source VDD_core and a second power source VDD_peri, having a voltage higher than the first power source, can be selectively applied in order to store data in the unit cell at high speed, and a connection unit 730 connecting the unit cell 710 and the data input buffer BUF.
[82] Normally, the first power source VDD_core is applied to the input buffer BUF. However, when a high-speed storage signal is input, the second power source VDD_peri, which is higher than the first power source VDD_core, is applied to the input buffer BUF to improve its driving capability, so that the data can be stored quickly through the path X.
[83] FIG. 14 is a simulation waveform diagram of data storage by the circuit of FIG. 13.
[84] Referring to FIG. 14, the second power supply VDD_peri is applied to the input buffer BUF during the initial period of the word line enable interval, and the first power supply VDD_core is applied during the remaining period, so that data can be stored in the cell quickly.
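The supply switching of FIGS. 13 and 14 reduces to a simple selection rule. The sketch below is only a behavioral abstraction: the voltages, the boost window, and the function name are illustrative assumptions and do not model the analog circuit of FIG. 13.

```python
# Behavioral abstraction of the dual-supply write buffer of FIG. 13 (values are placeholders).

VDD_CORE = 1.8   # V, first power source: the normal buffer supply (example value)
VDD_PERI = 2.5   # V, second, higher power source used for fast storage (example value)

def buffer_supply(t_ns, fast_write, boost_window_ns=5.0):
    """Select the buffer supply during a word-line-enable interval starting at t = 0."""
    if fast_write and t_ns < boost_window_ns:
        return VDD_PERI          # higher supply drives the cell harder early in the write
    return VDD_CORE              # then the buffer falls back to the normal core supply

# Writing data '1' with the boost enabled: the cell reaches its target level sooner.
for t in (1.0, 3.0, 8.0):
    print(t, buffer_supply(t, fast_write=True))
```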
[85] FIG. 15 is a simulation waveform diagram of the above-described fast read command in the operation of the DRAM shown in FIG. 1.
[86] Referring to FIG. 15, data '1' is stored in cell 'a' and data '0' is stored in cell 'b', and the word lines WLa and WLb are enabled. After the stored data is applied to the bit lines BL and /BL, no restoring operation occurs; it can therefore be seen that the data '1' is not written back to cell 'a' and the data '0' is not written back to cell 'b'.
[87] According to the present invention, even if the accessed bank changes irregularly during data access, continuous data output is possible at high speed by using two cache banks together with the normal banks. Therefore, data can be accessed at high speed regardless of the data access pattern, and since the two cache banks are composed of the same unit cells as the DRAM, they impose no significant burden in terms of DRAM area.
[88] The present invention described above is not limited to the above-described embodiments and the accompanying drawings, and it will be apparent to those skilled in the art that various substitutions, modifications, and changes can be made without departing from the spirit and scope of the present invention.
[89] According to the present invention, it is possible to provide a normal-bank DRAM capable of high-speed data input and output at all times, regardless of the data access pattern.
Claims (11)
[1" claim-type="Currently amended] Multiple normal banks;
At least one cache bank having substantially the same access scheme as the normal bank and for selectively storing the selected normal bank and data during a read operation; And
And control means for controlling access to the normal bank and the cache bank when there are consecutive read commands for the selected normal bank.
[2" claim-type="Currently amended] The method of claim 1,
And the normal bank and the cache bank have substantially the same cell array.
[3" claim-type="Currently amended] Multiple normal banks;
First and second cache banks having substantially the same access scheme as the normal bank; And
If there are alternating read accesses to different normal banks, the corresponding data is output in an interleaved manner. If there is a continuous read access to one normal bank, the corresponding data is output from the selected normal bank and the first data is outputted. Control means for controlling movement to the first or second cache bank
DRAM having a.
[4" claim-type="Currently amended] The method of claim 3, wherein
The control means,
And a read access to the corresponding data of the selected normal bank to output the data moved to the first or second cache bank to the output and the selected normal bank.
[5" claim-type="Currently amended] The method of claim 1,
The control means
An address comparison unit for comparing whether data corresponding to an address signal is in the cache bank;
An access controller for controlling data access of the cache bank or controlling data access of the normal bank according to a result compared by the address comparison unit; And
And a command decoder for controlling the access controller.
[6" claim-type="Currently amended] The method of claim 5, wherein
The address comparison unit
An input unit for receiving the address signal and dividing the bank address into one of a plurality of bank addresses and a cell address corresponding to one of a plurality of unit cells in one bank; And
And a comparator for receiving the bank address and the cell address and comparing the bank address and the cell address with the stored data of the cache bank.
[7" claim-type="Currently amended] The method of claim 6
First flip-flop means for outputting the bank address and the cell address output from the input unit to the comparison unit in synchronization with a clock;
A predecoder for decoding the cell address output from the first flip-flop and outputting the cell address to the comparator;
Second flip-flop means for outputting the cell address output from the predecoder and the bank address output from the first flip-flop in synchronization with the clock; and
And a third flip-flop means for latching an output signal of the comparator and synchronizing the clock with the clock.
[8" claim-type="Currently amended] The method of claim 7, wherein
The access controller
Using the signal output to the third flip-flop as a determination signal for controlling the normal bank and the cash bank in the current clock,
The signal output from the comparator is used as a determination signal for controlling the normal bank and the cache bank at a next clock.
A bank address output from the first flip-flop is used as a bank address signal for data access at a next clock,
And using a bank address signal output from the second flip-flop as a bank address for data access in the current clock.
[9" claim-type="Currently amended] The method of claim 7, wherein
And a data control signal for controlling the normal bank or the cache bank in the access controller, and an output latch unit for matching the output timing of the cell address and the bank address output from the second flip-flop. DRAM.
[10" claim-type="Currently amended] The method of claim 9,
Fourth flip-flop means for outputting an input control signal to the command decoder in synchronization with an output signal of the first flip-flop; And
And a fifth flip-flop means for latching the output signal of the command decoder and outputting the output signal of the command decoder to the access controller so as to synchronize the output signal of the command decoder with the output signal of the second flip-flop. DRAM
[11" claim-type="Currently amended] The method of claim 1,
The plurality of banks each include a plurality of sense amplifier units for amplifying a signal stored in a unit cell.
The sense amplifier unit
A sense amplifier for sensing and amplifying a signal applied to a bit line connected to one unit cell among a plurality of unit cells in the bank;
Precharge means for shorting or insulating the sense amplifier and the unit cell or precharging the bit line;
Data input means for providing a data path for storing data in the unit cell through the sense amplifier; And
And data output means for providing a data path for outputting data stored in the unit cell amplified by the sense amplifier.
Patent family:
US7277977B2 | 2007-10-02
TWI297153B | 2008-05-21
CN1469391A | 2004-01-21
US20040015646A1 | 2004-01-22
TW200402057A | 2004-02-01
CN100345216C | 2007-10-24
JP2004055112A | 2004-02-19
JP4759213B2 | 2011-08-31
KR100541366B1 | 2006-01-16
Legal status:
2002-07-19 | Application filed by 주식회사 하이닉스반도체
2002-07-19 | Priority to KR20020042380A
2004-01-31 | Publication of KR20040008709A
2006-01-16 | Application granted
2006-01-16 | Publication of KR100541366B1
Priority applications:
Application number | Filing date | Patent title
KR20020042380A | 2002-07-19 | DRAM for high speed Data access